

Reducing Adversarially Robust Learning to Non-Robust PAC Learning

Neural Information Processing Systems

We study the problem of reducing adversarially robust learning to standard PAC learning, i.e. the complexity of learning adversarially robust predictors using access to only a black-box non-robust learner. We give a reduction that can robustly learn any hypothesis class C using any non-robust learner A for C. The number of calls to A depends logarithmically on the number of allowed adversarial perturbations per example, and we give a lower bound showing this is unavoidable.
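To make the reduction concrete, here is an illustrative sketch (not the paper's actual algorithm) of the black-box idea: inflate each training example into its set of allowed adversarial perturbations, then run a boosting-style loop that repeatedly calls the non-robust learner `A` on the reweighted inflated sample, and output a weighted majority vote. All names (`stump_learner`, `robust_boost`, `perturb`) are hypothetical, and the simple threshold-stump learner stands in for an arbitrary black-box non-robust learner.

```python
import math

def stump_learner(points, labels, weights):
    """Weighted ERM over 1-D threshold stumps h(x) = sgn * sign(x - t).
    Plays the role of the black-box non-robust learner A (illustrative only)."""
    best, best_err = None, float("inf")
    for t in sorted(set(points)):
        for sgn in (+1, -1):
            err = sum(w for x, y, w in zip(points, labels, weights)
                      if sgn * (1 if x >= t else -1) != y)
            if err < best_err:
                best_err, best = err, (t, sgn)
    t, sgn = best
    return lambda x: sgn * (1 if x >= t else -1)

def robust_boost(data, perturb, learner, rounds=5):
    """Hypothetical sketch of a robust-to-non-robust reduction:
    inflate each (x, y) into its perturbation set, run an AdaBoost-style
    loop calling the non-robust learner on the reweighted inflated sample,
    and return the weighted majority vote over the learned hypotheses."""
    inflated = [(z, y) for x, y in data for z in perturb(x)]
    n = len(inflated)
    weights = [1.0 / n] * n
    hyps = []
    for _ in range(rounds):
        pts = [z for z, _ in inflated]
        lbs = [y for _, y in inflated]
        h = learner(pts, lbs, weights)  # one black-box call to A
        err = sum(w for (z, y), w in zip(inflated, weights) if h(z) != y)
        err = min(max(err, 1e-10), 1 - 1e-10)  # clip to avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        hyps.append((alpha, h))
        weights = [w * math.exp(-alpha * y * h(z))
                   for (z, y), w in zip(inflated, weights)]
        s = sum(weights)
        weights = [w / s for w in weights]
    return lambda x: 1 if sum(a * h(x) for a, h in hyps) >= 0 else -1

# Demo: 1-D points with perturbation sets U(x) = {x - 0.1, x, x + 0.1}.
data = [(0.0, -1), (0.3, -1), (2.0, 1), (2.4, 1)]
perturb = lambda x: [x - 0.1, x, x + 0.1]
f = robust_boost(data, perturb, stump_learner)
assert all(f(z) == y for x, y in data for z in perturb(x))
```

The sketch illustrates why the number of calls to `A` can matter: each boosting round is one black-box call, and the paper's result is that logarithmically many calls in the number of allowed perturbations per example suffice (and are necessary).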


Review for NeurIPS paper: Reducing Adversarially Robust Learning to Non-Robust PAC Learning

Neural Information Processing Systems

Summary and Contributions: Robust learning has received a lot of attention over the last decade. This paper is about test-time attacks (also called evasion attacks, or attacks finding "adversarial examples"). Such attacks have been studied widely from an experimental point of view, but more recently theoretical results are being proven to understand how, and when, learning under such adversarial perturbations is possible. Montasser et al. (MHS, COLT '19) showed that if a problem is PAC learnable, i.e., has finite VC dimension d, then it is also robustly PAC learnable, though the sample complexity (in their proof) could be as large as exponential in d. This paper's main contribution is an alternative proof of the result of MHS, showing how to reduce the task of robust learning to the task of standard PAC learning.


Review for NeurIPS paper: Reducing Adversarially Robust Learning to Non-Robust PAC Learning

Neural Information Processing Systems

All the reviewers agreed that the paper provides a novel and important theoretical result. Specifically, one of the main contributions of the paper is to show that if a problem is PAC learnable, i.e., has finite VC dimension d, then it is also robustly PAC learnable. The result had already been proven last year; however, this paper provides a novel framework for proving it by showing how to reduce the task of robust learning to the task of standard PAC learning. (It should be noted that the reduction may not be efficient, i.e., polynomial time, but it is still important from the theoretical and statistical perspective.) The reviewers also had some suggestions for the revised version of the paper, which are reflected in their updated reviews.

